Augmented Lagrangian Methods and Proximal Point Methods for Convex Optimization

Author

  • A. N. Iusem
Abstract

We present a review of the classical proximal point method for finding zeroes of maximal monotone operators, and its application to augmented Lagrangian methods, including a rather complete convergence analysis. Next we discuss the generalized proximal point methods, either with Bregman distances or φ-divergences, which in turn give rise to a family of generalized augmented Lagrangians, as smooth in the primal variables as the data functions are. We give a sketch of the convergence analysis for the case of the proximal point method with Bregman distances for variational inequality problems. The difficulty with these generalized augmented Lagrangians lies in establishing optimality of the cluster points of the primal sequence, which is rather immediate in the classical case. In connection with this issue we present two results. First we prove optimality of such cluster points under a strict complementarity assumption (basically, that no tight constraint is redundant at any solution). In the absence of this assumption, we establish an ergodic convergence result, namely optimality of the cluster points of a sequence of weighted averages of the primal sequence given by the method, improving over weaker ergodic results previously known. Finally we discuss similar ergodic results for the augmented Lagrangian method with φ-divergences and give the explicit formulae of generalized augmented Lagrangian methods for different choices of the Bregman distances and the φ-divergences.
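As an illustration of the classical proximal point iteration reviewed above, x_{k+1} = argmin_x f(x) + ||x − x_k||²/(2λ), here is a minimal numerical sketch. The quadratic test function, step sizes, and inner gradient loop are assumptions for the example only; any convex subproblem solver could be used:

```python
import numpy as np

def proximal_point(grad_f, x0, lam=1.0, outer=50, tol=1e-8):
    # Classical proximal point iteration for min f(x):
    #   x_{k+1} = argmin_x f(x) + ||x - x_k||^2 / (2 * lam)
    # The strongly convex inner subproblem is solved here by plain
    # gradient descent (illustrative only).
    x = np.asarray(x0, dtype=float)
    for _ in range(outer):
        y = x.copy()
        for _ in range(200):  # minimize y -> f(y) + ||y - x||^2/(2 lam)
            g = grad_f(y) + (y - x) / lam
            if np.linalg.norm(g) < tol:
                break
            y = y - 0.1 * g
        if np.linalg.norm(y - x) < tol:  # outer fixed point reached
            return y
        x = y
    return x

# Example: f(x) = ||x - c||^2, whose minimizer is c.
c = np.array([1.0, -2.0])
sol = proximal_point(lambda x: 2 * (x - c), np.zeros(2))
```

Each outer step solves a strongly convex regularized subproblem; this regularization is what connects the method to augmented Lagrangians, since applying the proximal iteration to the dual of a constrained problem yields the multiplier method.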


Similar articles

A Generalized Proximal-Point Method for Convex Optimization Problems in Hilbert Spaces∗

We deal with a generalization of the proximal-point method and the closely related Tikhonov regularization method for convex optimization problems. The prime motivation behind this is the well-known connection between the classical proximal-point and augmented Lagrangian methods, and the emergence of modified augmented Lagrangian methods in recent years. Our discussion includes a formal proof o...


Fast Multiplier Methods to Optimize Non-exhaustive, Overlapping Clustering

Clustering is one of the most fundamental and important tasks in data mining. Traditional clustering algorithms, such as K-means, assign every data point to exactly one cluster. However, in real-world datasets, the clusters may overlap with each other. Furthermore, often, there are outliers that should not belong to any cluster. We recently proposed the NEO-K-Means (Non-Exhaustive, Overlapping ...


Proximal Point Nonlinear Rescaling Method for Convex Optimization

Nonlinear rescaling (NR) methods alternate finding an unconstrained minimizer of the Lagrangian for the equivalent problem in the primal space (which is an infinite procedure) with Lagrange multipliers update. We introduce and study a proximal point nonlinear rescaling (PPNR) method that preserves convergence and retains a linear convergence rate of the original NR method and at the same time d...


Augmented Lagrangian method for solving absolute value equation and its application in two-point boundary value problems

One of the most important topics considered by researchers in recent years is the absolute value equation (AVE). The absolute value equation seems to be a useful tool in optimization, since it subsumes the linear complementarity problem and thus also linear programming and convex quadratic programming. This paper introduces a new method for solving the absolute value equation. To do this, we transform a...
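For context, the AVE is Ax − |x| = b, with |x| taken componentwise. The following is a simple fixed-point sketch of the equation itself, not the augmented Lagrangian method of the paper above; the data are made up, and convergence of this Picard-type iteration requires the smallest singular value of A to exceed 1, so that x → A⁻¹(|x| + b) is a contraction:

```python
import numpy as np

def solve_ave(A, b, x0=None, iters=100, tol=1e-10):
    # Fixed-point iteration for the absolute value equation
    #   A x - |x| = b,
    # rewritten as x = A^{-1}(|x| + b). Converges when
    # sigma_min(A) > 1 (contraction with factor ||A^{-1}|| < 1).
    x = np.zeros(len(b)) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        x_new = np.linalg.solve(A, np.abs(x) + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, 1.0], [1.0, 4.0]])  # sigma_min(A) = 3 > 1
b = np.array([1.0, -9.0])               # chosen so that x* = (1, -2)
x = solve_ave(A, b)
```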


Numerical Comparison of Augmented Lagrangian Algorithms for Nonconvex Problems

Augmented Lagrangian algorithms are very popular tools for solving nonlinear programming problems. At each outer iteration of these methods a simpler optimization problem is solved, for which efficient algorithms can be used, especially when the problems are large. The most famous Augmented Lagrangian algorithm for minimization with inequality constraints is known as Powell-Hestenes-Rockafellar...
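For a single inequality constraint g(x) ≤ 0, the Powell-Hestenes-Rockafellar (PHR) scheme mentioned above minimizes L_ρ(x, μ) = f(x) + (1/(2ρ))(max(0, μ + ρ g(x))² − μ²) over x and then updates μ ← max(0, μ + ρ g(x)). A minimal sketch follows; the test problem, step sizes, and plain gradient-descent inner solver are assumptions for illustration, not any particular published implementation:

```python
import numpy as np

def phr_augmented_lagrangian(grad_f, g, grad_g, x0, rho=10.0, outer=30):
    # PHR augmented Lagrangian for  min f(x)  s.t.  g(x) <= 0
    # (one scalar constraint, for clarity). The inner minimization
    # of L_rho(x, mu) uses plain gradient descent; in practice any
    # efficient unconstrained solver is used at each outer iteration.
    x, mu = np.asarray(x0, dtype=float), 0.0
    for _ in range(outer):
        for _ in range(500):  # minimize x -> L_rho(x, mu)
            mult = max(0.0, mu + rho * g(x))      # shifted multiplier
            grad = grad_f(x) + mult * grad_g(x)   # grad of L_rho in x
            if np.linalg.norm(grad) < 1e-10:
                break
            x = x - 0.02 * grad
        mu = max(0.0, mu + rho * g(x))            # PHR multiplier update
    return x, mu

# Example: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 <= 2,
# whose solution is the projection x* = (1.5, 0.5), with mu* = 1.
x_star, mu_star = phr_augmented_lagrangian(
    grad_f=lambda x: 2 * (x - np.array([2.0, 1.0])),
    g=lambda x: x[0] + x[1] - 2.0,
    grad_g=lambda x: np.array([1.0, 1.0]),
    x0=np.zeros(2),
)
```

Note that max(0, μ + ρ g(x))² is once but not twice differentiable at the kink, which is one motivation for the smoother generalized augmented Lagrangians discussed in the main abstract.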




Journal title:

Volume   Issue 

Pages  -

Publication date: 1999